Inductive probability
Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and it provides the mathematical foundation for learning and for the perception of patterns. It is a source of knowledge about the world.
There are three sources of knowledge: inference, communication, and deduction. Communication relays information found using other methods. Deduction establishes new facts based on existing facts. Only inference establishes new facts from data.
The basis of inference is Bayes' theorem. But this theorem is sometimes hard to apply and understand. A simpler way to understand inference is in terms of quantities of information.
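For reference, Bayes' theorem gives the probability of a hypothesis H after observing data D in terms of the prior probability P(H) and the likelihood P(D | H):

```latex
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
```

The difficulty noted above lies mainly in choosing the prior P(H); the information-based view described next gives one answer to that question.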
Information describing the world is written in a language. For example, a simple mathematical language of propositions may be chosen. Sentences may be written down in this language as strings of characters. In a computer these sentences can be encoded as strings of bits (1s and 0s). The language may then be encoded so that the most commonly used sentences are the shortest. This internal language implicitly represents the probabilities of statements.
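As a minimal sketch of that idea (the sentences and their frequencies below are invented for illustration), Shannon coding assigns a sentence of probability p a code of about -log2(p) bits, so frequent sentences get short codes and each code length implies a probability:

```python
import math

# Toy corpus: how often each "sentence" of the language is used.
# These sentences and counts are purely illustrative.
frequencies = {
    "it rains": 8,
    "it is sunny": 4,
    "it snows": 2,
    "it hails": 2,
}

total = sum(frequencies.values())

for sentence, count in frequencies.items():
    p = count / total
    # Shannon code length: about -log2(p) bits, rounded up.
    code_length = math.ceil(-math.log2(p))
    # Reading the encoding backwards: a code of n bits implies
    # a probability of 2**-n for the sentence it encodes.
    implied_p = 2.0 ** -code_length
    print(f"{sentence!r}: p = {p:.3f}, "
          f"code length = {code_length} bits, "
          f"implied probability = {implied_p:.3f}")
```

Because the toy frequencies here are powers of two, the implied probabilities match the original ones exactly; in general they agree to within one bit of code length.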
Occam's razor says that the simplest theory consistent with the data is most likely to be correct. Here the "simplest theory" is interpreted as the theory with the shortest representation written in this internal language: the theory with the shortest encoding in the internal language is the most likely to be correct.
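One common way to make this precise (borrowed from algorithmic probability, not stated explicitly here) is to give each theory T a prior probability that halves with every extra bit of its encoding:

```latex
P(T) = 2^{-L(T)}
```

where L(T) is the length in bits of T written in the internal language. Among theories consistent with the data, the one with the smallest L(T) then has the highest prior probability.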
== History ==

Probability and statistics were focused on probability distributions and tests of significance. Probability was formal and well defined, but limited in scope. In particular, its application was limited to situations that could be defined as an experiment or trial, with a well-defined population.
Bayes' theorem is named after the Rev. Thomas Bayes (1701–1761). Bayesian inference broadened the application of probability to many situations where a population was not well defined. But Bayes' theorem always depends on prior probabilities to generate new probabilities, and it was unclear where these prior probabilities should come from.
Circa 1964, Ray Solomonoff developed algorithmic probability, which gave an explanation of what randomness is and of how patterns in data may be represented by computer programs that give shorter representations of the data.
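Kolmogorov complexity itself is uncomputable, but a general-purpose compressor can serve as a crude, computable stand-in for "the shortest program that reproduces the data". The sketch below (illustrative only, using zlib as the stand-in) shows that a patterned string admits a much shorter representation than a noisy string of the same length:

```python
import random
import zlib

random.seed(0)

# A string with an obvious pattern, and a "random-looking" string
# of the same length.
patterned = ("01" * 500).encode()
noisy = bytes(random.randrange(256) for _ in range(1000))

for name, data in [("patterned", patterned), ("noisy", noisy)]:
    # Compressed size is a rough upper bound on description length.
    compressed = len(zlib.compress(data, 9))
    print(f"{name}: raw = {len(data)} bytes, "
          f"compressed = {compressed} bytes")
```

The patterned string compresses to a few dozen bytes while the noisy one hardly compresses at all, mirroring Solomonoff's point that patterned data has short descriptions and random data does not.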
Chris Wallace and D. M. Boulton developed minimum message length circa 1968, and Jorma Rissanen later developed minimum description length circa 1978. These methods relate information theory to probability in a way comparable to the application of Bayes' theorem, but they also give a source and an explanation for the role of prior probabilities.
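In minimum message length / minimum description length terms, the best hypothesis H for data D is the one minimizing a two-part code: the length of the hypothesis plus the length of the data encoded with the hypothesis's help:

```latex
L(H) + L(D \mid H)
```

Since an efficient code satisfies L(x) = -\log_2 P(x), minimizing this total length is the same as maximizing P(H)\,P(D \mid H), i.e. the Bayesian posterior. The code length L(H) plays the role of the prior, which is how these methods supply the prior probabilities that Bayes' theorem leaves unspecified.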
Circa 1998, Marcus Hutter combined decision theory with the work of Ray Solomonoff and Andrey Kolmogorov to give a theory of Pareto optimal behavior for an intelligent agent.


Excerpt source: Wikipedia, the free encyclopedia.
Read the full article "Inductive probability" on Wikipedia.


